Recent advances in multimodal training use textual descriptions to significantly enhance machine understanding of images and videos. However, it remains unclear to what extent language can fully capture sensory experiences across modalities. A well-established approach to characterizing sensory experience relies on similarity judgments, namely, the degree to which people perceive two distinct stimuli as similar. We explore the relationship between human similarity judgments and language in a series of large-scale behavioral studies ($n = 1,823$ participants) spanning three modalities (images, audio, and video) and two types of text descriptors: simple word tags and free-text captions. In doing so, we introduce a novel adaptive pipeline for tag mining that is both efficient and domain-general. We show that prediction pipelines based on text descriptors perform excellently, comparing them against 611 baseline models built on vision, audio, and video processing architectures. We further show that the degree to which text descriptors and models predict human similarity varies across and within modalities. Taken together, these studies illustrate the value of integrating machine learning and cognitive science approaches to better understand the similarities and differences between human and machine representations. We provide an interactive visualization at https://words-are-all-you-need.s3.amazonaws.com/index.html for exploring the similarity between stimuli as experienced by humans and as captured by the different methods reported in this paper.
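To make the comparison concrete, here is a minimal sketch, not the authors' pipeline, of how text descriptors could be turned into pairwise similarity predictions and scored against human judgments with a rank correlation; the bag-of-words encoder, captions, pairs, and ratings below are placeholder assumptions.

```python
# Minimal sketch (not the authors' exact pipeline): predict pairwise stimulus
# similarity from text descriptors and correlate it with human judgments.
import numpy as np
from scipy.stats import spearmanr

def embed(texts):
    # Stand-in for a real text encoder (e.g., a sentence-embedding model);
    # here we just count words into a bag-of-words vector for illustration.
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = np.zeros((len(texts), len(vocab)))
    for row, t in enumerate(texts):
        for w in t.lower().split():
            vecs[row, index[w]] += 1.0
    return vecs

captions = ["a dog running on grass", "a puppy playing in a park",
            "a red sports car", "a vintage car on a road"]
pairs = [(0, 1), (0, 2), (1, 3), (2, 3)]
human_similarity = np.array([0.9, 0.1, 0.2, 0.8])  # hypothetical ratings per pair

E = embed(captions)
model_similarity = np.array([
    E[i] @ E[j] / (np.linalg.norm(E[i]) * np.linalg.norm(E[j]) + 1e-9)
    for i, j in pairs
])

rho, _ = spearmanr(model_similarity, human_similarity)
print(f"Spearman correlation with human judgments: {rho:.2f}")
```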
The ability to acquire abstract knowledge is a hallmark of human intelligence and is considered by many to be one of the core differences between humans and neural network models. Agents can acquire an inductive bias toward abstraction through meta-learning, in which they are trained on a distribution of tasks that share some abstract structure that can be learned and applied. However, because neural networks are difficult to interpret, it is hard to tell whether an agent has learned the underlying abstraction or merely the statistical patterns characteristic of that abstraction. In this work, we compare the performance of humans and agents in a meta-reinforcement learning paradigm in which tasks are generated from abstract rules. We define a new method for constructing "task metamers" that closely match the statistics of the abstract tasks but use a different underlying generative process, and we evaluate performance on both the abstract and metamer tasks. In our first set of experiments, we find that humans perform better on the abstract tasks than on the metamer tasks, whereas a widely used meta-reinforcement learning agent performs worse on the abstract tasks than on the matched metamers. In a second set of experiments, we ground the tasks in abstractions derived directly from empirically identified human priors. We use the same procedure to generate the corresponding metamer tasks and observe the same double dissociation between humans and agents. This work lays the groundwork for characterizing differences between human and machine learning, which can be used in future work toward developing machines with more human-like behavior.
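As a hypothetical illustration of the metamer idea (not the paper's actual construction), the sketch below generates sequences from an explicit abstract rule and a "metamer" control that matches only the symbol statistics; the alternation rule and the unigram matching are assumptions made purely for clarity.

```python
# Hypothetical illustration of the "task metamer" idea: an abstract task
# generates sequences from a rule, while a metamer task matches the low-order
# statistics without the underlying rule.
import random
from collections import Counter

def abstract_task(length=8):
    # Rule: strictly alternate between two randomly chosen symbols.
    a, b = random.sample("ABCD", 2)
    return "".join(a if i % 2 == 0 else b for i in range(length))

def metamer_task(samples, length=8):
    # Match the symbol (unigram) frequencies of the abstract task,
    # but sample i.i.d., so the alternation rule is absent.
    counts = Counter("".join(samples))
    symbols, weights = zip(*counts.items())
    return "".join(random.choices(symbols, weights=weights, k=length))

abstract_samples = [abstract_task() for _ in range(5)]
print("abstract:", abstract_samples[0])
print("metamer :", metamer_task(abstract_samples))
```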
The automated synthesis of correct-by-construction Boolean functions from logical specifications is known as the Boolean Functional Synthesis (BFS) problem. BFS has many application areas, ranging from software engineering to circuit design. In this paper, we introduce BNSynth, the first tool to solve the BFS problem under a given bound on the solution space. Bounding the solution space induces the synthesis of smaller functions, which benefits resource-constrained areas such as circuit design. BNSynth uses a counter-example guided, neural approach to solve the bounded BFS problem. Initial results show promise in synthesizing smaller solutions; we observe at least a 3.2X (and up to 24X) average reduction in solution size compared to state-of-the-art tools on our benchmarks. BNSynth is available on GitHub under an open source license.
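For intuition on the counter-example guided loop, here is a toy CEGIS-style stand-in (not BNSynth's neural method): it enumerates bounded candidate functions, prunes them against accumulated counter-examples, and verifies survivors against the specification. The XOR specification is an assumed example.

```python
# Toy counter-example guided loop for bounded Boolean functional synthesis.
# The candidate space is bounded to truth tables over two inputs.
from itertools import product

def spec(x1, x2, y):
    # Example relational specification: y must equal XOR(x1, x2).
    return y == (x1 ^ x2)

# Bounded candidate space: every function expressible as a truth table over (x1, x2).
candidates = [dict(zip(product([0, 1], repeat=2), outputs))
              for outputs in product([0, 1], repeat=4)]

counterexamples = []
for table in candidates:
    # Cheap check against previously found counter-examples.
    if any(not spec(x1, x2, table[(x1, x2)]) for x1, x2 in counterexamples):
        continue
    # Full verification; on failure, record a new counter-example input.
    bad = next(((x1, x2) for x1, x2 in product([0, 1], repeat=2)
                if not spec(x1, x2, table[(x1, x2)])), None)
    if bad is None:
        print("synthesized truth table:", table)
        break
    counterexamples.append(bad)
```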
In recent years, denoising diffusion models have demonstrated outstanding image generation performance. The information on natural images captured by these models is useful for many image reconstruction applications, where the task is to restore a clean image from its degraded observations. In this work, we propose a conditional sampling scheme that exploits the prior learned by diffusion models while retaining agreement with the observations. We then combine it with a novel approach for adapting pretrained diffusion denoising networks to their input. We examine two adaption strategies: the first uses only the degraded image, while the second, which we advocate, is performed using images that are "nearest neighbors" of the degraded image, retrieved from a diverse dataset using an off-the-shelf visual-language model. To evaluate our method, we test it on two state-of-the-art publicly available diffusion models, Stable Diffusion and Guided Diffusion. We show that our proposed "adaptive diffusion for image reconstruction" (ADIR) approach achieves a significant improvement in the super-resolution, deblurring, and text-based editing tasks.
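Under our reading of the abstract, the retrieval step could look roughly like the sketch below, which embeds the degraded image and a candidate set with an off-the-shelf visual-language model (CLIP via the transformers library) and selects cosine-nearest neighbors. The file paths and the choice of CLIP checkpoint are assumptions, and the adaptation of the diffusion denoiser itself is not shown.

```python
# Sketch of the nearest-neighbor retrieval step only: embed images with CLIP
# and rank candidates by cosine similarity to the degraded query image.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_embed(images):
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return torch.nn.functional.normalize(feats, dim=-1)

degraded = Image.open("degraded.png")                            # placeholder path
dataset = [Image.open(p) for p in ["a.png", "b.png", "c.png"]]   # placeholder paths

query = clip_embed([degraded])
keys = clip_embed(dataset)
scores = (query @ keys.T).squeeze(0)
nearest = scores.topk(k=2).indices.tolist()
print("indices of nearest neighbors used for adaptation:", nearest)
```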
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
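As a usage note, the released checkpoints can be loaded with the Hugging Face transformers library; the sketch below prompts a small public BLOOM variant (bigscience/bloom-560m) in a few-shot style, since the full 176B model requires multi-GPU infrastructure. The prompt is an arbitrary example.

```python
# Few-shot prompting with a small open BLOOM checkpoint via `transformers`.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom-560m")
model = AutoModelForCausalLM.from_pretrained("bigscience/bloom-560m")

prompt = (
    "Translate English to French.\n"
    "cat -> chat\n"
    "dog -> chien\n"
    "bird ->"
)
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```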
Business documents come in a variety of structures, formats and information needs, which makes information extraction a challenging task. Due to these variations, having a single document-generic model that works well across all types of documents and all use cases seems far-fetched. For document-specific models, we would need customized document-specific labels. We introduce DoSA (Document Specific Automated Annotations), which helps annotators generate initial annotations automatically using our novel bootstrap approach by leveraging document-generic datasets and models. These initial annotations can then be reviewed by a human for correctness. An initial document-specific model can be trained, and its inference can be used as feedback for generating further automated annotations. These automated annotations are reviewed by a human-in-the-loop for correctness, and a new, improved model can be trained using the current model as the pre-trained model before the next iteration. In this paper, our scope is limited to form-like documents due to the limited availability of generic annotated datasets, but the idea can be extended to a variety of other documents as more datasets are built. An open-source, ready-to-use implementation is available on GitHub at https://github.com/neeleshkshukla/DoSA.
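A schematic of the bootstrap cycle, in which every function is a hypothetical stub rather than the DoSA API, might look like the following; the stubs exist only so the control flow can be read end to end.

```python
# Schematic of the bootstrap / human-in-the-loop annotation cycle.
def auto_annotate(documents, model):
    return [{"doc": d, "labels": model(d)} for d in documents]

def human_review(annotations):
    return annotations  # stand-in: a human would correct labels here

def train_specific_model(documents, reviewed, pretrained):
    return pretrained   # stand-in: fine-tune the previous model on reviewed labels

def bootstrap_annotation_loop(documents, generic_model, iterations=3):
    model = generic_model                              # document-generic starting point
    annotations = auto_annotate(documents, model)      # initial automated annotations
    for _ in range(iterations):
        reviewed = human_review(annotations)           # annotator checks correctness
        model = train_specific_model(documents, reviewed, pretrained=model)
        annotations = auto_annotate(documents, model)  # inference feeds the next round
    return model, annotations

model, annotations = bootstrap_annotation_loop(
    documents=["invoice_001", "invoice_002"],
    generic_model=lambda doc: {"total": None},         # placeholder generic model
)
```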
Although deep neural networks (DNNs) have great generalization and prediction power, their functioning does not allow detailed explanations of their behavior. Opaque deep learning models are increasingly used to make important predictions in critical settings, and the danger is that they make and use predictions that cannot be justified or legitimized. Several explainable artificial intelligence (XAI) methods have emerged that operate separately from the machine learning model, but they suffer from limited faithfulness to the model's actual functioning and limited robustness. As a result, there is broad agreement on the importance of deep learning models endowed with explanation capabilities, so that they can themselves answer why a particular prediction was made. First, we address the problem that XAI lacks a universal criterion for what constitutes an explanation by formalizing it. We also introduce a set of axioms and definitions to clarify XAI from a mathematical perspective. Finally, we propose Greybox XAI, a framework that composes a DNN and a transparent model through the use of a symbolic knowledge base (KB). We extract the KB from the dataset and use it to train a transparent model (i.e., a logistic regression). An encoder-decoder architecture is trained on RGB images to produce output similar to the KB used by the transparent model. Once the two models have been trained independently, they are used compositionally to form an explainable predictive model. We show how this new architecture is both accurate and explainable across several datasets.
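A rough sketch of the compositional idea (not the paper's exact architecture): a transparent logistic regression is trained on KB-style symbolic attributes, and at prediction time those attributes come from a neural model. The random data and the stub attribute predictor below are assumptions for illustration.

```python
# Transparent classifier over symbolic attributes, fed by a (stubbed) neural
# attribute predictor, composed into one explainable prediction path.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Symbolic KB attributes extracted from the dataset (e.g., "has_wheels", "has_wings").
kb_attributes = rng.integers(0, 2, size=(200, 5)).astype(float)
labels = (kb_attributes[:, 0] + kb_attributes[:, 3] > 1).astype(int)

transparent_model = LogisticRegression().fit(kb_attributes, labels)

def neural_attribute_predictor(image):
    # Placeholder for the encoder-decoder mapping an RGB image to KB attributes.
    return rng.integers(0, 2, size=(1, 5)).astype(float)

predicted_attributes = neural_attribute_predictor(image=None)
prediction = transparent_model.predict(predicted_attributes)[0]
print("class:", prediction)
print("attribute weights (the explanation):", transparent_model.coef_[0])
```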
Diffusion models are a class of generative models that, compared to other generative models, have shown outstanding performance in creating realistic images when trained on natural image datasets. We introduce DISPR, a diffusion-based model for solving the inverse problem of predicting three-dimensional (3D) cell shapes from two-dimensional (2D) single-cell microscopy images. The 2D microscopy image serves as a prior, so the reconstruction is conditioned on predicting a realistic 3D shape. To demonstrate the applicability of DISPR as a data augmentation tool in a feature-based single-cell classification task, we extract morphological features from cells grouped into six highly imbalanced classes. Adding the features predicted by DISPR to the three minority classes improved the macro F1 score from $F1_\text{macro} = 55.2 \pm 4.6\%$ to $F1_\text{macro} = 72.2 \pm 4.9\%$. As our method is the first to employ a diffusion-based model in this context, we demonstrate that diffusion models can be applied to inverse problems in 3D and that they learn to reconstruct 3D shapes with realistic morphological features from 2D microscopy images.
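For reference, the macro F1 metric quoted above averages per-class F1 scores with equal weight, which is why it is sensitive to minority-class improvements; the toy labels below are made up solely to show the computation with scikit-learn.

```python
# Macro F1: per-class F1 averaged with equal weight across classes.
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 1, 1, 2, 2, 2, 3]   # imbalanced toy ground truth
y_pred = [0, 0, 0, 1, 1, 1, 2, 2, 0, 3]

print("macro F1:", f1_score(y_true, y_pred, average="macro"))
print("per-class F1:", f1_score(y_true, y_pred, average=None))
```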
Tremendous progress in synthetic image generation makes it possible to produce face images at high resolution and with photo-realism. In biometric applications, the main motivation for using synthetic data is to address the shortage of publicly available biometric data while reducing the privacy risks involved in processing such sensitive information. This work exploits these advantages by applying recent face age modification algorithms to generate mated samples, and thereby studies the effect of aging on the performance of an open-source biometric recognition system. In addition, a real dataset is used to evaluate the effect of short-term aging and to compare the biometric performance against the synthetic domain. The main findings show that short-term aging in the range of 1-5 years has only a minor effect on general recognition performance. However, correct verification of mated faces with a long-term age difference of more than 20 years remains a significant challenge and requires further investigation.
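A minimal sketch of this kind of evaluation, assuming synthetic comparison scores and an arbitrary decision threshold (neither comes from the paper), would measure the false non-match rate of mated pairs separately for short-term and long-term age gaps:

```python
# False non-match rate (FNMR) of mated pairs at a fixed threshold, split by
# the age gap between the paired images. Scores are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(1)
threshold = 0.5

# Hypothetical comparison scores for mated pairs (higher = more similar).
short_term_scores = rng.normal(loc=0.75, scale=0.10, size=1000)   # 1-5 year gap
long_term_scores = rng.normal(loc=0.55, scale=0.15, size=1000)    # >20 year gap

def fnmr(scores, threshold):
    # Fraction of genuine (mated) pairs wrongly rejected.
    return float(np.mean(scores < threshold))

print(f"FNMR, short-term aging: {fnmr(short_term_scores, threshold):.3f}")
print(f"FNMR, long-term aging:  {fnmr(long_term_scores, threshold):.3f}")
```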
This paper presents a summary of the Competition on Face Morphing Attack Detection Based on Privacy-Aware Synthetic Training Data (SYN-MAD), held at the 2022 International Joint Conference on Biometrics (IJCB 2022). The competition attracted 12 participating teams from academia and industry, based in 11 different countries. In the end, the participating teams submitted seven valid entries, which were evaluated by the organizers. The competition was held to present and attract solutions that address the detection of face morphing attacks while protecting people's privacy for ethical and legal reasons. To ensure this, the training data was restricted to synthetic data provided by the organizers. The submitted solutions introduced innovations that outperformed the considered baseline in many experimental settings. The evaluation benchmark is now available at: https://github.com/marcohuber/syn-mad-2022.